Neuron with Steady Response Leads to Better Generalization

These authors contributed equally to the work. Work performed during the internship at MSRA.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Fu, Qiang, Du, Lun, Mao, Haitao, Chen, Xu, Fang, Wei, Han, Shi, Zhang, Dongmei
Regularization can mitigate the generalization gap between training and inference by introducing inductive bias. Existing works have proposed various inductive biases from diverse perspectives. However, none of them explores inductive bias from the perspective of the class-dependent response distribution of individual neurons. In this paper, we conduct a substantial analysis of the characteristics of such distributions. Based on the analysis results, we articulate the Neuron Steadiness Hypothesis: neurons with similar responses to instances of the same class lead to better generalization. Accordingly, we propose a new regularization method called Neuron Steadiness Regularization (NSR) to reduce neuron intra-class response variance. Based on the Complexity Measure, we theoretically guarantee the effectiveness of NSR for improving generalization. We conduct extensive experiments on Multilayer Perceptrons, Convolutional Neural Networks, and Graph Neural Networks with popular benchmark datasets of diverse domains, which show that NSR consistently outperforms the vanilla versions of the models with significant gains and low additional computational overhead.
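The core idea of the abstract — penalizing each neuron's response variance within a class — can be sketched as a batch-level penalty. This is a minimal illustrative sketch, not the paper's exact formulation: the function name, the unweighted mean over classes and neurons, and the use of a plain batch variance estimate are all assumptions.

```python
import numpy as np

def neuron_steadiness_penalty(activations, labels):
    """Hypothetical sketch of an intra-class response variance penalty.

    activations: array of shape (batch, neurons), hidden-layer responses.
    labels: array of shape (batch,), integer class labels.
    Returns the mean, over classes and neurons, of each neuron's
    response variance within a class; added to the task loss, it
    pushes neurons toward steady per-class responses.
    """
    activations = np.asarray(activations, dtype=float)
    labels = np.asarray(labels)
    per_class_var = []
    for c in np.unique(labels):
        resp = activations[labels == c]        # responses to class c
        per_class_var.append(resp.var(axis=0)) # variance per neuron
    return float(np.mean(per_class_var))
```

If every neuron responds identically to all instances of a class, the penalty is zero; spread within a class raises it, which matches the hypothesis that steadier neurons generalize better.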